Results 1 - 3 of 3

1.
Comput Math Organ Theory; 27(2): 179-194, 2021.
Article in English | MEDLINE | ID: covidwho-1202526

ABSTRACT

The 2020 coronavirus pandemic has heightened the need to flag coronavirus-related misinformation, and fact-checking groups have taken to verifying misinformation on the Internet. We explore stories reported by the fact-checking groups PolitiFact, Poynter and Snopes from January to June 2020. We group these stories into six clusters, then analyse temporal trends in story validity and the level of agreement across sites. The sites present the same stories 78% of the time, with the highest agreement between Poynter and PolitiFact. We further break down the story clusters into more granular story types by proposing a novel automated method, which can classify stories from diverse sources, both fact-checked stories and tweets. Our results show that story-type classification performs best when the classifier is trained on the same medium it is applied to, with contextualised BERT vector representations outperforming a Bag-of-Words classifier.
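
A minimal sketch of the comparison described above, assuming a small set of labelled story texts; the placeholder data, the sentence-transformers "all-MiniLM-L6-v2" encoder, and the logistic-regression classifiers are illustrative assumptions, not the authors' pipeline:

    # Compare a Bag-of-Words classifier against one trained on contextualised
    # BERT sentence embeddings for story-type classification (sketch only).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sentence_transformers import SentenceTransformer

    texts = [  # stand-ins for fact-checked story texts or tweets
        "Drinking hot water cures the virus", "Garlic protects against infection",
        "A hidden vaccine already exists", "Vitamin C stops the disease",
        "The city will be locked down tomorrow", "Schools are closing next week",
        "All flights are cancelled from Monday", "Masks will be rationed soon",
    ]
    labels = ["cure", "cure", "cure", "cure", "policy", "policy", "policy", "policy"]
    X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25, random_state=0)

    # Bag-of-Words baseline
    bow = CountVectorizer()
    clf_bow = LogisticRegression(max_iter=1000).fit(bow.fit_transform(X_tr), y_tr)
    pred_bow = clf_bow.predict(bow.transform(X_te))

    # Contextualised BERT embeddings (here obtained via sentence-transformers)
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    clf_bert = LogisticRegression(max_iter=1000).fit(encoder.encode(X_tr), y_tr)
    pred_bert = clf_bert.predict(encoder.encode(X_te))

    print("BoW  macro-F1:", f1_score(y_te, pred_bow, average="macro"))
    print("BERT macro-F1:", f1_score(y_te, pred_bert, average="macro"))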

2.
IEEE Internet Comput; 25(2): 84-91, 2021 Mar.
Article in English | MEDLINE | ID: covidwho-1255035

ABSTRACT

We analyze a Singapore-based COVID-19 Telegram group with more than 10,000 participants. First, we study the group's opinion over time, focusing on five dimensions: participation, sentiment, negative emotions, topics, and message types. We find that participation peaked when the Ministry of Health raised the disease alert level, but this engagement was not sustained. Second, we investigate the prevalence of, and reactions to, authority-identified misinformation in the group. We find that authority-identified misinformation is rare, and that participants affirm, deny, and question it. Third, we explore searching for user skepticism as a strategy for identifying misinformation, and surface misinformation not previously identified by authorities.
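
A minimal sketch of two of the steps above, assuming an exported list of (timestamp, text) messages; the sample messages and the skepticism marker list are assumptions for illustration, not the study's data or lexicon:

    # (1) Daily participation counts and (2) a keyword search for sceptical
    # messages, one heuristic for surfacing candidate misinformation.
    from collections import Counter
    from datetime import datetime

    messages = [  # placeholder (timestamp, text) pairs standing in for a Telegram export
        ("2020-02-07 09:15", "DORSCON raised to orange, stock up now!"),
        ("2020-02-07 09:20", "is this true? sounds like a hoax"),
        ("2020-02-08 11:02", "any source for the lockdown rumour?"),
    ]
    SKEPTICISM_MARKERS = ("is this true", "fake", "hoax", "source", "really?", "debunk")

    # Participation over time: number of messages per calendar day
    daily = Counter(datetime.strptime(ts, "%Y-%m-%d %H:%M").date() for ts, _ in messages)

    # Skepticism search: keep messages containing any marker term
    flagged = [(ts, text) for ts, text in messages
               if any(m in text.lower() for m in SKEPTICISM_MARKERS)]

    print(dict(daily))
    print(flagged)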

3.
Comput Math Organ Theory; 27(3): 324-342, 2021.
Article in English | MEDLINE | ID: covidwho-1220492

ABSTRACT

Digital disinformation presents a challenging problem for democracies worldwide, especially in times of crisis like the COVID-19 pandemic. In countries like Singapore, legislative efforts to quell fake news constitute relatively new and understudied contexts for understanding local information operations. This paper presents a social cybersecurity analysis of the 2020 Singaporean elections, which took place at the height of the pandemic and after the recent passage of an anti-fake news law. Harnessing a dataset of 240,000 tweets about the elections, we found that 26.99% of participating accounts were likely to be bots, responsible for a larger proportion of tweets than in the 2015 election. Textual analysis further showed that the detected bots used simpler and more abusive second-person language, as well as hashtags related to COVID-19 and voter activity, pointing to aggressive tactics that potentially fuelled online hostility and questioned the legitimacy of the polls. Finally, bots were associated with larger, less dense, and less echo chamber-like communities, suggesting efforts to participate in larger, mainstream conversations. However, despite their distinct narrative and network maneuvers, bots generally did not hold significant influence throughout the social network. Hence, although the intersecting concerns of political conflict during a global pandemic may readily raise the possibility of online interference, we quantify both the efforts and the limits of bot-fuelled disinformation in the 2020 Singaporean elections. We conclude with several implications for digital disinformation in times of crisis, in the Asia-Pacific and beyond.
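
A minimal sketch of the kind of bot and community analysis described, assuming per-account bot probabilities have already been produced by a separate bot-detection model and that tweets have been aggregated into an account-interaction graph; the scores, the 0.7 threshold, and the toy graph are placeholders, not the paper's data or method:

    # Flag likely bots by thresholding a precomputed bot-probability score,
    # then inspect the density and bot share of detected communities.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    bot_score = {"a": 0.92, "b": 0.20, "c": 0.81, "d": 0.10, "e": 0.15, "f": 0.88}
    THRESHOLD = 0.7  # assumed cut-off for "likely bot"
    bots = {u for u, s in bot_score.items() if s >= THRESHOLD}
    print(f"likely bots: {len(bots) / len(bot_score):.2%} of accounts")

    # Toy interaction graph: edges stand in for retweets/mentions between accounts
    G = nx.Graph([("a", "b"), ("b", "c"), ("c", "a"), ("d", "e"), ("e", "f"), ("a", "d")])

    for community in greedy_modularity_communities(G):
        sub = G.subgraph(community)
        print(sorted(community),
              "density:", round(nx.density(sub), 2),
              "bot share:", round(len(community & bots) / len(community), 2))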
